Developing a large-scale institutional DSS designed to serve multiple managers in different business functions can be considerably more challenging than developing the much more common one-user, one-function DSS that have evolved over the past few years. In this article we review some of the evidence suggesting that extra effort and rigor in the early planning and analysis stages of large-scale DSS development is worthwhile. We attempt to identify those characteristics of DSS that require treatment different from that provided by traditional structured techniques. We then present, in the form of a case study, a hybrid technique we refer to as Decision Support Analysis (DSA), which has been used effectively in developing large-scale institutional DSS. Finally, we discuss some of the positive and negative experiences that have emerged from using DSA.
Traditional project management and design methods used for data processing and MIS applications are ill-suited to decision support systems (DSS). The authors argue that effective management of DSS development requires: a) an explicit plan for the full development life cycle; b) careful assignment of responsibility for DSS development; c) appropriate user involvement and direction; and d) ongoing user needs assessment and problem diagnosis. A 13-stage tactical plan for DSS development, called the DSS development life cycle, is described. Results are presented from an in-depth survey of users of 34 different DSS, showing that the tasks performed most ineffectively in DSS development are planning, assessment of user needs, and system evaluation. Survey results are also presented that identify the factors responsible for DSS project approval and the factors responsible for DSS success.
Today managers and policy makers are confronted with an overwhelming range of computer software choices for developing decision support systems (DSS). The authors argue that DSS language evaluation and selection should be a multi-step process involving most, if not all, of the following:

1. End User Needs Assessment and Problem Diagnosis
2. Critical Success Factor Identification
3. Feature Analysis and Capability Review
4. Demonstration Prototype Development
5. External User Surveys
6. Benchmark and Simulation Tests
7. Programmer Productivity and End User Orientation Analysis

The objectives of each of these activities are described, as well as specific procedures for accomplishing them. In addition, the authors discuss the usefulness of a multi-disciplinary task force to carry out the DSS language evaluation and selection process.
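As a hypothetical illustration of step 3 (Feature Analysis and Capability Review), the sketch below shows one way a task force might combine per-criterion ratings of candidate DSS languages into a single weighted score. The criteria, weights, and scores here are invented for the example; the article itself prescribes no particular scoring scheme.

```python
# Hypothetical weighted-scoring sketch for feature analysis of candidate
# DSS languages. All criteria, weights, and ratings are illustrative.

CRITERIA = {                      # weight of each evaluation criterion (sums to 1.0)
    "modeling_power": 0.30,
    "end_user_orientation": 0.25,
    "data_handling": 0.20,
    "vendor_support": 0.15,
    "cost": 0.10,
}

# Candidate DSS languages rated 1-10 on each criterion (invented values).
candidates = {
    "Language A": {"modeling_power": 8, "end_user_orientation": 6,
                   "data_handling": 7, "vendor_support": 9, "cost": 5},
    "Language B": {"modeling_power": 6, "end_user_orientation": 9,
                   "data_handling": 8, "vendor_support": 7, "cost": 8},
}

def weighted_score(ratings: dict) -> float:
    """Combine per-criterion ratings into a single weighted total."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

# Rank candidates from highest to lowest weighted score.
for name, ratings in sorted(candidates.items(),
                            key=lambda kv: weighted_score(kv[1]),
                            reverse=True):
    print(f"{name}: {weighted_score(ratings):.2f}")
```

A scoring matrix of this kind would complement, rather than replace, the demonstration prototypes, external user surveys, and benchmark tests called for in the later steps of the process.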